Gradient-Free Training of Autoencoders for Non-Differentiable Communication Channels

Authors

Abstract

Training of autoencoders using the back-propagation algorithm is challenging for non-differentiable channel models or in an experimental environment where gradients cannot be computed. In this paper, we study a gradient-free training method based on the cubature Kalman filter. To numerically validate the method, the autoencoder is employed to perform geometric constellation shaping for differentiable communication channels, showing the same performance as back-propagation-based training. The investigation is then extended to a non-differentiable channel that includes laser phase noise, additive white Gaussian noise, and blind phase search-based phase noise compensation. Our results indicate that the autoencoder can be successfully optimized with the proposed method to achieve better robustness to residual phase noise than standard constellation schemes such as Quadrature Amplitude Modulation and Iterative Polar Modulation under the considered conditions.
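The abstract does not detail the training procedure, but the core idea of cubature Kalman filter (CKF) based optimization can be sketched: the network weights are treated as the state of a nonlinear filter, and each batch of desired outputs acts as a measurement used to update the weights without computing any gradients. The following is a minimal illustrative sketch only, not the authors' implementation; the names `ckf_train_step`, `cubature_points`, and the generic `model(theta, x)` forward function are hypothetical.

```python
import numpy as np

def cubature_points(mean, cov):
    """2n points of the third-degree spherical-radial cubature rule."""
    n = mean.size
    S = np.linalg.cholesky(cov)                    # matrix square root of the covariance
    offsets = np.sqrt(n) * np.hstack([S, -S])      # shape (n, 2n)
    return mean[:, None] + offsets                 # each column is one cubature point

def ckf_train_step(theta, P, x_batch, y_batch, model, R, Q):
    """One CKF update treating the weight vector `theta` as the filter state.

    model(theta, x_batch) must return the network outputs for those weights;
    R and Q are the measurement- and process-noise covariances (assumed given)."""
    n = theta.size
    P = P + Q                                      # time update: weights modeled as a random walk
    pts = cubature_points(theta, P)                # propagate cubature points ...
    preds = np.stack([model(pts[:, i], x_batch).ravel()
                      for i in range(2 * n)], axis=1)   # ... through the network
    y_hat = preds.mean(axis=1)                     # predicted measurement
    dY = preds - y_hat[:, None]
    dX = pts - theta[:, None]
    Pyy = dY @ dY.T / (2 * n) + R                  # innovation covariance
    Pxy = dX @ dY.T / (2 * n)                      # state-measurement cross-covariance
    K = Pxy @ np.linalg.inv(Pyy)                   # Kalman gain
    theta = theta + K @ (y_batch.ravel() - y_hat)  # weight update, no gradients needed
    P = P - K @ Pyy @ K.T
    return theta, P

# Toy usage on a linear "network" so the update can be sanity-checked:
rng = np.random.default_rng(0)
model = lambda th, x: x @ th
x = rng.normal(size=(8, 3))
y = x @ np.array([1.0, -2.0, 0.5])
theta, P = np.zeros(3), np.eye(3)
for _ in range(30):
    theta, P = ckf_train_step(theta, P, x, y, model,
                              R=1e-2 * np.eye(8), Q=1e-6 * np.eye(3))
```

Because only forward passes through `model` are needed, the same update applies unchanged when the forward path contains a non-differentiable channel, which is the setting the paper targets.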


Related Articles

Convergence of gradient based pre-training in Denoising autoencoders

The success of deep architectures is at least in part attributed to the layer-by-layer unsupervised pre-training that initializes the network. Various papers have reported extensive empirical analysis focusing on the design and implementation of good pre-training procedures. However, an understanding pertaining to the consistency of parameter estimates, the convergence of learning procedures an...


Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines

This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitly storing the natural gradient metric L. This metric is shown to be the expected second derivative of...


Training Deep AutoEncoders for Collaborative Filtering

This paper proposes a model for the rating prediction task in recommender systems which significantly outperforms previous state-of-the-art models on a time-split Netflix data set. Our model is based on a deep autoencoder with 6 layers and is trained end-to-end without any layer-wise pre-training. We empirically demonstrate that: a) deep autoencoder models generalize much better than the shallow ones,...


Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training

Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections....


Training Stacked Denoising Autoencoders for Representation Learning

We implement stacked denoising autoencoders, a class of neural networks that are capable of learning powerful representations of high dimensional data. We describe stochastic gradient descent for unsupervised training of autoencoders, as well as a novel genetic algorithm based approach that makes use of gradient information. We analyze the performance of both optimization algorithms and also th...



Journal

Journal title: Journal of Lightwave Technology

Year: 2021

ISSN: 0733-8724, 1558-2213

DOI: https://doi.org/10.1109/jlt.2021.3103339